
    Multimodal Comprehension in Left Hemisphere Stroke Patients

    Hand gestures, imagistically related to the content of speech, are ubiquitous in face-to-face communication. Here we investigated how people with aphasia (PWA) process speech accompanied by gestures, using lesion-symptom mapping. Twenty-nine PWA and 15 matched controls were shown a picture of an object/action and then a video clip of a speaker producing speech and/or gestures in one of the following combinations: speech-only, gesture-only, congruent speech-gesture, and incongruent speech-gesture. Participants' task was to indicate, in different blocks, whether the picture and the word matched (speech task), or whether the picture and the gesture matched (gesture task). Multivariate lesion analysis with Support Vector Regression Lesion-Symptom Mapping (SVR-LSM) showed that the benefit from congruent speech-gesture pairings was associated with (1) lesioned voxels in anterior fronto-temporal regions including the inferior frontal gyrus (IFG), and sparing of posterior temporal cortex and lateral temporal-occipital regions (pTC/LTO), for the speech task, and (2) conversely, lesions to pTC/LTO and sparing of anterior regions for the gesture task. The two tasks did not share overlapping voxels. Costs from incongruent speech-gesture pairings were associated with lesioned voxels in these same anterior (for the speech task) and posterior (for the gesture task) regions, but, crucially, also shared voxels in the superior temporal gyri (STG) and middle temporal gyri (MTG), including the anterior temporal lobe. These results suggest that IFG and pTC/LTO contribute to extracting semantic information from speech and gesture, respectively; however, they are not causally involved in integrating information from the two modalities. In contrast, regions in anterior STG/MTG are associated with performance in both tasks and may thus be critical to speech-gesture integration. These conclusions are further supported by associations between performance in the experimental tasks and performance in tests assessing lexical-semantic processing and gesture recognition.
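
    For readers unfamiliar with the method, the sketch below illustrates the general idea behind SVR-LSM: a linear support vector regression is fit from binarized lesion maps to a behavioural score, and the voxel weights are read back out as a multivariate lesion-symptom map. The array names, shapes, and parameters are assumptions for illustration only, not the authors' pipeline.

        # Minimal SVR-LSM sketch: fit a linear SVR from lesion maps to behaviour
        # and recover one weight per voxel. Fake data; all parameters illustrative.
        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        n_patients, n_voxels = 29, 5000                  # e.g. 29 PWA, voxels inside a lesion mask
        lesion_maps = rng.integers(0, 2, (n_patients, n_voxels)).astype(float)  # 1 = lesioned voxel
        behaviour = rng.normal(size=n_patients)          # e.g. congruent speech-gesture benefit score

        svr = SVR(kernel="linear", C=30.0)               # linear kernel keeps voxel weights interpretable
        svr.fit(lesion_maps, behaviour)
        weight_map = svr.coef_.ravel()                   # one weight per voxel: the lesion-symptom map

        # In practice the weight map is thresholded by permutation testing
        # (refitting with shuffled behavioural scores) before interpretation.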

    A kinematic examination of dual-route processing for action imitation

    The dual-route model of imitation suggests that meaningful and meaningless actions are processed through an indirect or a direct route, respectively. Evidence indicates that the direct route is more cognitively demanding, since it relies on mapping the visuospatial properties of the observed action onto a performed one. These cognitive demands might negatively influence reaction time and accuracy for actions performed following a meaningless action under time constraints. However, how the processing of meaningful and meaningless action imitation is reflected in movement kinematics is not yet clear. We aimed to confirm whether meaningless action performance incurs a reaction-time cost, whether the cost is reflected in kinematics, and, more generally, to examine kinematic markers of emblematic meaningful and meaningless action imitation. We examined participants' reaction time and wrist movements when they imitated emblematic meaningful or matched meaningless gestures in either blocks of the same action type or mixed blocks. Meaningless actions were associated with a greater correction period at the end of the movement, possibly reflecting a strategy designed to ensure accurate completion of less familiar actions under time constraints. Furthermore, in mixed blocks, trials following meaningless actions had a significantly increased reaction time, supporting previous claims that route selection for action imitation may be stimulus-driven. However, there was convincing evidence for this effect only with an interval of ~2948 ms between movements, but not with ~3573 ms or ~2553 ms. Future work motion-tracking the entire hand to assess imitation accuracy, and examining more closely the influence of the duration between movements, may help to explain these effects.
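
    For concreteness, the sketch below shows one way reaction time and an end-of-movement correction period could be derived from motion-tracked wrist positions; the sampling rate, speed threshold, and the definition of the correction period (time from peak wrist speed to movement offset) are assumptions for illustration, not the authors' exact criteria.

        # Illustrative kinematic measures from wrist position data (assumed criteria).
        import numpy as np

        def movement_metrics(positions, fs=100.0, speed_thresh=0.05):
            """positions: (n_samples, 3) wrist coordinates in metres, sampled at fs Hz,
            starting at the go signal. speed_thresh (m/s) is illustrative; assumes a
            movement was actually made."""
            speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fs  # wrist speed, m/s
            moving = speed > speed_thresh
            onset = int(np.argmax(moving))                        # first sample above threshold
            offset = len(speed) - int(np.argmax(moving[::-1]))    # last sample above threshold
            peak = onset + int(np.argmax(speed[onset:offset]))    # sample of peak wrist speed
            reaction_time = onset / fs                            # s from go signal to movement onset
            correction_period = (offset - peak) / fs              # s from peak speed to movement offset
            return reaction_time, correction_period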

    Conceptual knowledge for understanding others' actions is organized primarily around action goals

    Semantic knowledge about objects entails both knowing how to grasp an object (grip-related knowledge) and knowing what to do with an object (goal-related knowledge). Considerable evidence suggests a hierarchical organization in which specific hand grips in action execution are most often selected to accomplish a remote action goal. The present study investigated whether a comparable hierarchical organization of semantic knowledge also applies to the recognition of others' object-directed actions. The correctness of either the grip (hand grip applied to the object) or the goal (end location at which an object was directed) was manipulated independently in two experiments. In Experiment 1, subjects were required to attend selectively to either the correctness of the grip or the goal of the observed action. Subjects were faster when attending to the goal of the action, and strong interference from goal violations was observed when subjects attended to the grip of the action. Importantly, observation of irrelevant goal- or grip-related violations interfered with decisions about the correctness of the relevant dimension only when the relevant dimension was correct. In contrast, in Experiment 2, when subjects attended to an action-irrelevant stimulus dimension (i.e., the orientation of the object), no interference from goal- or grip-related violations was found, ruling out the possibility that the interference effects result from perceptual differences between stimuli. These findings suggest that understanding the correctness of an action selectively recruits specialized but interacting networks that process the correctness of goal- and grip-specific information during action observation.

    Apraxia and motor dysfunction in corticobasal syndrome

    Background: Corticobasal syndrome (CBS) is characterized by multifaceted motor system dysfunction and cognitive disturbance; distinctive clinical features include limb apraxia and visuospatial dysfunction. Transcranial magnetic stimulation (TMS) has been used to study motor system dysfunction in CBS, but the relationship of TMS parameters to clinical features has not been studied. The present study explored several hypotheses: first, that limb apraxia may be partly due to visuospatial impairment in CBS; second, that motor system dysfunction can be demonstrated in CBS using threshold-tracking TMS and is linked to limb apraxia; and finally, that atrophy of the primary motor cortex, studied using voxel-based morphometry (VBM), is associated with motor system dysfunction and limb apraxia in CBS.
    Methods: Imitation of meaningful and meaningless hand gestures was graded to assess limb apraxia, while cognitive performance was assessed using the Addenbrooke's Cognitive Examination - Revised (ACE-R), with particular emphasis placed on the visuospatial subtask. Patients underwent TMS, to assess cortical function, and VBM.
    Results: In total, 17 patients with CBS (7 male, 10 female; mean age 64.4 ± 6.6 years) were studied and compared to 17 matched control subjects. Of the CBS patients, 23.5% had a relatively inexcitable motor cortex, with evidence of cortical dysfunction in the remaining 76.5%. Reduced resting motor threshold and visuospatial performance correlated with limb apraxia. Patients with a resting motor threshold <50% performed significantly worse on the visuospatial subtask of the ACE-R than the other CBS patients. Cortical function correlated with atrophy of the primary and pre-motor cortices and the thalamus, while apraxia correlated with atrophy of the pre-motor and parietal cortices.
    Conclusions: Cortical dysfunction appears to underlie the core clinical features of CBS and is associated with atrophy of the primary motor and pre-motor cortices, as well as the thalamus, while apraxia correlates with pre-motor and parietal atrophy.

    The prognosis of allocentric and egocentric neglect: evidence from clinical scans

    We contrasted the neuroanatomical substrates of sub-acute and chronic visuospatial deficits associated with different aspects of unilateral neglect, using computed tomography scans acquired as part of routine clinical diagnosis. Voxel-wise statistical analyses were conducted on a group of 160 stroke patients scanned at a sub-acute stage. Lesion-deficit relationships were assessed across the whole brain, separately for grey and white matter. We assessed lesions that were associated with behavioural performance (i) at a sub-acute stage (within 3 months of the stroke) and (ii) at a chronic stage (beyond 9 months post-stroke). Allocentric and egocentric neglect symptoms at the sub-acute stage were associated with lesions to dissociable regions within the frontal lobe, amongst other regions. However, these frontal lesions were not associated with neglect at the chronic stage. In contrast, lesions in the angular gyrus were associated with persistent allocentric neglect, while lesions within the superior temporal gyrus extending into the supramarginal gyrus, as well as lesions within the basal ganglia and insula, were associated with persistent egocentric neglect. Damage within the temporo-parietal junction was associated with both types of neglect at the sub-acute stage and 9 months later. Furthermore, white matter disconnections resulting from damage along the superior longitudinal fasciculus were associated with both types of neglect and were critically related to both sub-acute and chronic deficits. Finally, there was a significant difference in lesion volume between patients who recovered from neglect and patients with chronic deficits. These findings provide evidence that (i) lesion location and lesion size can be used to successfully predict the outcome of neglect based on clinical CT scans, (ii) lesion location alone can serve as a critical predictor of persistent neglect symptoms, (iii) widespread lesions are associated with neglect symptoms at the sub-acute stage, but only some of these are critical for predicting whether neglect will become a chronic disorder, and (iv) the severity of behavioural symptoms can be a useful predictor of recovery in the absence of neuroimaging findings on clinical scans. We discuss the implications for understanding the symptoms of the neglect syndrome, the recovery of function, and the use of clinical scans to predict outcome.
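
    The voxel-wise logic here is the standard mass-univariate lesion-symptom mapping approach: each voxel's lesion status is tested against the behavioural deficit across patients. The sketch below is a generic illustration of that idea with fabricated data and assumed thresholds, not the study's specific pipeline.

        # Generic voxel-wise lesion-deficit analysis (illustrative, fake data).
        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(1)
        n_patients, n_voxels = 160, 4000
        lesions = rng.integers(0, 2, (n_patients, n_voxels)).astype(bool)  # True = voxel lesioned
        neglect = rng.normal(size=n_patients)         # e.g. chronic egocentric neglect severity

        t_map = np.full(n_voxels, np.nan)
        for v in range(n_voxels):
            hit, spared = neglect[lesions[:, v]], neglect[~lesions[:, v]]
            if hit.size >= 5 and spared.size >= 5:    # minimum lesion-count threshold per voxel
                t_map[v] = ttest_ind(hit, spared, equal_var=False).statistic

        # The resulting map is then thresholded with a multiple-comparisons
        # correction (e.g. permutation-based FWE) before interpretation.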

    Effects of lower limb amputation on the mental rotation of feet

    What happens to the mental representation of our body when the actual anatomy of our body changes? We asked 18 able-bodied controls, 18 patients with a lower limb amputation, and a patient with rotationplasty to perform a laterality judgment task. They were shown illustrations of feet in different orientations, which they had to classify as belonging to the left or right limb. This laterality recognition task, originally introduced by Parsons (Cognitive Psychology, 19, 178–241, 1987), is known to elicit implicit mental rotation of the subject's own body part. However, it can also be solved by mental transformation of the visual stimuli. Despite the anatomical changes in the body periphery of the amputees and of the rotationplasty patient, no differences were found in their ability to identify illustrations of their affected versus contralateral limb, while the group of able-bodied controls showed clear laterality effects. These findings are discussed in the context of various strategies for mental rotation versus the maintenance of an intact, prototypical body structural description.

    Visual detail about the body modulates tactile localisation biases

    The localisation of tactile stimuli requires the integration of visual and somatosensory inputs within an internal representation of the body surface, and is prone to consistent bias. Joints may play a role in segmenting such internal body representations, and may therefore influence tactile localisation biases, although the nature of this influence remains unclear. Here, we investigate the relationship between conceptual knowledge of joint locations and tactile localisation biases on the hand. In one task, participants localised tactile stimuli applied to the dorsum of their hand. A distal localisation bias was observed in all participants, consistent with previous results. We also manipulated the availability of visual information during this task, to determine whether the absence of this information could account for the distal bias observed here and by Mancini and colleagues (2011). The observed distal bias increased in magnitude when visual information was restricted, without a corresponding decrease in precision. In a separate task, the same participants indicated, from memory, knuckle locations on a silhouette image of their hand. Analogous distal biases were also seen in the knuckle localisation task. The accuracy of conceptual joint knowledge was not correlated with the magnitude of the tactile localisation bias, although a similarity in the observed bias direction suggests that both tasks may rely on a common, higher-order body representation. These results also suggest that distortions of conceptual body representation may be more common in healthy individuals than previously thought.
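
    The distal bias reported here is, in essence, a signed localisation error along the hand's proximo-distal axis. The sketch below shows how such a bias and its precision could be quantified and compared across viewing conditions; the coordinate frame and the data are assumptions for illustration, not the study's materials.

        # Quantifying a distal localisation bias (assumed coordinates, fake data).
        import numpy as np

        def distal_bias(perceived, actual):
            """perceived, actual: (n_trials, 2) positions on the hand dorsum in mm,
            with the second axis pointing distally (towards the knuckles)."""
            errors = perceived[:, 1] - actual[:, 1]    # positive = judged too far distal
            return errors.mean(), errors.std(ddof=1)   # bias magnitude and spread (precision)

        rng = np.random.default_rng(2)
        actual = rng.uniform(0, 60, size=(40, 2))
        with_vision = distal_bias(actual + rng.normal([0.0, 5.0], 4.0, (40, 2)), actual)
        no_vision = distal_bias(actual + rng.normal([0.0, 9.0], 4.0, (40, 2)), actual)
        # Pattern in the abstract: larger distal bias with vision restricted, similar spread.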

    Know Thyself: Behavioral Evidence for a Structural Representation of the Human Body

    Background: Representing one's own body is often viewed as a basic form of self-awareness. However, little is known about structural representations of the body in the brain.
    Methods and Findings: We developed an inter-manual version of the classical "in-between" finger gnosis task: participants judged whether the number of untouched fingers between two touched fingers was the same on both hands, or different. We thereby dissociated structural knowledge about fingers, specifying their order and relative position within a hand, from tactile sensory codes. Judgments following stimulation of homologous fingers were consistently more accurate than trials with no or partial homology. Further experiments showed that structural representations are more enduring than purely sensory codes, are used even when the number of fingers is irrelevant to the task, and, moreover, involve an allocentric representation of finger order, independent of hand posture.
    Conclusions: Our results suggest the existence of an allocentric representation of body structure at higher stages of the somatosensory processing pathway, in addition to the primary sensory representation.